35 research outputs found

    Optimal Jammer Placement in UAV-assisted Relay Networks

    We consider the relaying application of unmanned aerial vehicles (UAVs), in which UAVs are placed between two transceivers (TRs) to increase the throughput of the system. Instead of studying the placement of the UAVs, as pursued in the existing literature, we focus on the placement of a jammer, or a major source of interference, on the ground to effectively degrade the performance of the system, measured by the maximum achievable data rate of transmission between the TRs. We demonstrate that the optimal placement of the jammer is in general a non-convex optimization problem, for which obtaining the solution directly is intractable. Afterward, using the inherent characteristics of the signal-to-interference ratio (SIR) expressions, we propose a tractable approach to find the optimal position of the jammer. Based on the proposed approach, we investigate the optimal positioning of the jammer in both dual-hop and multi-hop UAV relaying settings. Numerical simulations are provided to evaluate the performance of our proposed method.
    Comment: 6 pages, 6 figures
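    As a rough illustration of the setup described above (not the paper's SIR-based method), the sketch below grid-searches ground positions for a jammer that minimizes the bottleneck rate of a dual-hop UAV relay. The coordinates, transmit powers, and simple distance-based path-loss model are all assumptions made for illustration.

```python
# Hedged sketch: brute-force search for a ground jammer position that minimizes the
# bottleneck rate of a dual-hop UAV relay link. Geometry, powers, and the path-loss
# model are illustrative assumptions, not the paper's formulation.
import numpy as np

TR1 = np.array([0.0, 0.0, 0.0])      # ground transceiver 1 (assumed coordinates)
TR2 = np.array([1000.0, 0.0, 0.0])   # ground transceiver 2
UAV = np.array([500.0, 0.0, 100.0])  # relay UAV hovering between the TRs
P_TX, P_JAM, ALPHA = 1.0, 1.0, 2.0   # transmit/jammer powers, path-loss exponent

def sir(tx, rx, jammer):
    """Signal-to-interference ratio at rx for the link tx -> rx with one jammer."""
    d_sig = np.linalg.norm(rx - tx)
    d_jam = max(np.linalg.norm(rx - jammer), 1.0)  # minimum separation avoids a degenerate zero distance
    return (P_TX * d_sig ** -ALPHA) / (P_JAM * d_jam ** -ALPHA)

def dual_hop_rate(jammer):
    """The achievable rate is limited by the weaker of the two hops (TR1->UAV, UAV->TR2)."""
    return np.log2(1.0 + min(sir(TR1, UAV, jammer), sir(UAV, TR2, jammer)))

# Grid search over candidate ground positions (z = 0) for the jammer.
best_pos, best_rate = None, np.inf
for x in np.linspace(-200, 1200, 141):
    for y in np.linspace(-200, 200, 41):
        rate = dual_hop_rate(np.array([x, y, 0.0]))
        if rate < best_rate:
            best_pos, best_rate = (x, y), rate

print(f"most damaging jammer position ~ {best_pos}, bottleneck rate ~ {best_rate:.3f} b/s/Hz")
```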

    Decentralized Event-Triggered Federated Learning with Heterogeneous Communication Thresholds

    A recent emphasis of distributed learning research has been on federated learning (FL), in which model training is conducted by the data-collecting devices. Existing research on FL has mostly focused on a star topology learning architecture with synchronized (time-triggered) model training rounds, where the local models of the devices are periodically aggregated by a centralized coordinating node. However, in many settings, such a coordinating node may not exist, motivating efforts to fully decentralize FL. In this work, we propose a novel methodology for distributed model aggregations via asynchronous, event-triggered consensus iterations over the network graph topology. We consider heterogeneous communication event thresholds at each device that weigh the change in local model parameters against the available local resources in deciding the benefit of aggregations at each iteration. Through theoretical analysis, we demonstrate that our methodology achieves asymptotic convergence to the globally optimal learning model under standard assumptions in the distributed learning and graph consensus literature, and without restrictive connectivity requirements on the underlying topology. Subsequent numerical results demonstrate that our methodology obtains substantial improvements in communication requirements compared with FL baselines.
    Comment: 8 pages
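    The following is a minimal sketch of what one event-triggered consensus aggregation round could look like, assuming a simple drift-versus-threshold trigger and uniform mixing weights over one-hop neighbors. These choices are illustrative assumptions, not the exact triggering conditions or asynchrony model analyzed in the paper.

```python
# Hedged sketch of one event-triggered consensus aggregation round across n devices.
import numpy as np

def event_triggered_round(models, last_broadcast, thresholds, adjacency, lr, grads):
    """Local SGD step, then conditional broadcast and consensus mixing with neighbors."""
    n = len(models)

    # 1) Local update on each device's own data (gradients supplied by the caller).
    for i in range(n):
        models[i] = models[i] - lr * grads[i]

    # 2) A device broadcasts only if its model drifted enough since its last broadcast,
    #    where "enough" is a device-specific (heterogeneous) threshold.
    triggered = [np.linalg.norm(models[i] - last_broadcast[i]) > thresholds[i] for i in range(n)]
    for i in range(n):
        if triggered[i]:
            last_broadcast[i] = models[i].copy()

    # 3) Consensus mixing with one-hop neighbors, using the freshest broadcast copies.
    new_models = []
    for i in range(n):
        neighbors = [j for j in range(n) if adjacency[i][j]]
        weight = 1.0 / (len(neighbors) + 1)  # uniform weights that sum to one
        mixed = weight * models[i] + sum(weight * last_broadcast[j] for j in neighbors)
        new_models.append(mixed)
    return new_models, last_broadcast, triggered
```

    Devices that trigger rarely (large thresholds, scarce resources) still benefit from neighbors' broadcasts, which is the intuition behind decoupling local training from communication.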

    Event-Triggered Decentralized Federated Learning over Resource-Constrained Edge Devices

    Federated learning (FL) is a technique for distributed machine learning (ML), in which edge devices carry out local model training on their individual datasets. In traditional FL algorithms, trained models at the edge are periodically sent to a central server for aggregation, utilizing a star topology as the underlying communication graph. However, assuming access to a central coordinator is not always practical, e.g., in ad hoc wireless network settings. In this paper, we develop a novel methodology for fully decentralized FL, where in addition to local training, devices conduct model aggregation via cooperative consensus formation with their one-hop neighbors over the decentralized underlying physical network. We further eliminate the need for a timing coordinator by introducing asynchronous, event-triggered communications among the devices. In doing so, to account for the inherent resource heterogeneity challenges in FL, we define personalized communication triggering conditions at each device that weigh the change in local model parameters against the available local resources. We theoretically demonstrate that our methodology converges to the globally optimal learning model at an $O\left(\frac{\ln k}{\sqrt{k}}\right)$ rate under standard assumptions in the distributed learning and consensus literature. Our subsequent numerical evaluations demonstrate that our methodology obtains substantial improvements in convergence speed and/or communication savings compared with existing decentralized FL baselines.
    Comment: 23 pages. arXiv admin note: text overlap with arXiv:2204.0372
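    Below is a minimal sketch of a personalized, resource-aware triggering rule in the spirit described above: model drift is compared against a threshold that grows as the communication budget shrinks. The particular way battery and bandwidth enter the threshold is an assumption made purely for illustration.

```python
# Hedged sketch: a personalized trigger that trades model drift against local resources.
import numpy as np

def should_broadcast(model, last_sent, base_threshold, battery_frac, bandwidth_frac):
    """Return True if the local model changed enough to justify spending resources."""
    drift = np.linalg.norm(model - last_sent)
    # Scarcer resources -> larger effective threshold -> fewer transmissions.
    resource_score = 0.5 * battery_frac + 0.5 * bandwidth_frac  # assumed weighting in (0, 1]
    effective_threshold = base_threshold / max(resource_score, 1e-6)
    return drift > effective_threshold

# Example: a device with a depleted battery raises its bar for communicating.
theta_now = np.array([0.9, -0.2])
theta_sent = np.array([1.0, -0.1])
print(should_broadcast(theta_now, theta_sent, base_threshold=0.05,
                       battery_frac=0.2, bandwidth_frac=0.8))
```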